
    Clinical Assistant Diagnosis for Electronic Medical Record Based on Convolutional Neural Network

    Automatically extracting useful information from electronic medical records and conducting disease diagnosis is a promising task for both clinical decision support (CDS) and natural language processing (NLP). Most existing systems are based on manually constructed knowledge bases and perform auxiliary diagnosis by rule matching. In this study, we present a clinical intelligent decision approach based on convolutional neural networks (CNNs), which automatically extracts high-level semantic information from electronic medical records and then performs automatic diagnosis without manually constructed rules or knowledge bases. We use 18,590 real-world clinical electronic medical records to train and test the proposed model. Experimental results show that the proposed model achieves 98.67% accuracy and 96.02% recall, which strongly supports that using a convolutional neural network to automatically learn high-level semantic features of electronic medical records and then conduct assisted diagnosis is feasible and effective.
    Comment: 9 pages, 4 figures, accepted by Scientific Reports
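The abstract does not publish the network's architecture, but the standard recipe it alludes to (a CNN over token embeddings of the record text, followed by pooling and a classifier over diagnoses) can be sketched as follows. All sizes, the random weights, and the `diagnose` helper are hypothetical stand-ins for a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the abstract gives no architecture details.
VOCAB, EMB, KERNEL, FILTERS, CLASSES = 1000, 32, 3, 16, 10

# Randomly initialised parameters stand in for trained weights.
embedding = rng.normal(0, 0.1, (VOCAB, EMB))
conv_w = rng.normal(0, 0.1, (FILTERS, KERNEL, EMB))
conv_b = np.zeros(FILTERS)
fc_w = rng.normal(0, 0.1, (FILTERS, CLASSES))
fc_b = np.zeros(CLASSES)

def diagnose(token_ids):
    """Map a tokenised medical record to a distribution over diagnoses."""
    x = embedding[token_ids]                     # (seq_len, EMB)
    seq_len = len(token_ids)
    # 1D convolution over the token sequence, one response map per filter.
    conv = np.stack([
        np.array([np.sum(x[t:t + KERNEL] * conv_w[f]) + conv_b[f]
                  for t in range(seq_len - KERNEL + 1)])
        for f in range(FILTERS)
    ])                                           # (FILTERS, seq_len - KERNEL + 1)
    pooled = np.maximum(conv, 0).max(axis=1)     # ReLU + max-over-time pooling
    logits = pooled @ fc_w + fc_b
    probs = np.exp(logits - logits.max())        # stable softmax
    return probs / probs.sum()

p = diagnose(rng.integers(0, VOCAB, size=50))
```

The max-over-time pooling is what lets a fixed-size classifier consume records of any length, which matters for free-text clinical notes.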

    The current opportunities and challenges of Web 3.0

    With recent advancements in AI and 5G technologies, as well as the nascent concepts of blockchain and the metaverse, a new revolution of the Internet, known as Web 3.0, is emerging. Given its significant potential impact on the internet landscape and various professional sectors, Web 3.0 has captured considerable attention from both academic and industry circles. This article presents an exploratory analysis of the opportunities and challenges associated with Web 3.0. Firstly, the study evaluates the technical differences between Web 1.0, Web 2.0, and Web 3.0, while also delving into the unique technical architecture of Web 3.0. Secondly, by reviewing current literature, the article highlights the current state of development surrounding Web 3.0 from both economic and technological perspectives. Thirdly, the study identifies numerous research and regulatory obstacles that presently confront Web 3.0 initiatives. Finally, the article concludes by providing a forward-looking perspective on the potential future growth and progress of Web 3.0 technology.

    In-N-Out: Face Video Inversion and Editing with Volumetric Decomposition

    3D-aware GANs offer new capabilities for creative content editing, such as view synthesis, while preserving the editing capability of their 2D counterparts. These methods use GAN inversion to reconstruct images or videos by optimizing a latent code, allowing for semantic editing by manipulating the code. However, a model pre-trained on a face dataset (e.g., FFHQ) often has difficulty handling faces with out-of-distribution (OOD) objects, e.g., heavy make-up or occlusions. We address this issue by explicitly modeling OOD objects in face videos. Our core idea is to represent the face in a video using two neural radiance fields, one for the in-distribution and the other for the out-of-distribution object, and compose them together for reconstruction. Such explicit decomposition alleviates the inherent trade-off between reconstruction fidelity and editability. We evaluate our method's reconstruction accuracy and editability on challenging real videos and showcase favorable results against other baselines.
    Comment: Project page: https://in-n-out-3d.github.io
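The abstract leaves the compositing scheme unspecified, but the textbook way to compose two radiance fields along a ray is to add their densities and density-weight their colours before volume rendering. The sketch below illustrates that standard recipe for a single ray; the function name, sample counts, and random inputs are all hypothetical:

```python
import numpy as np

def composite_ray(sigma_in, rgb_in, sigma_ood, rgb_ood, deltas):
    """Volume-render one ray through two radiance fields composed together.

    sigma_*: per-sample densities (N,); rgb_*: per-sample colours (N, 3);
    deltas: distances between consecutive samples (N,).
    """
    sigma = sigma_in + sigma_ood                       # densities add
    alpha = 1.0 - np.exp(-sigma * deltas)              # per-sample opacity
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], np.exp(-sigma * deltas)]))[:-1]
    weights = trans * alpha                            # (N,)
    # Colour at each sample: density-weighted mix of the two fields.
    denom = np.maximum(sigma, 1e-8)[:, None]
    rgb = (sigma_in[:, None] * rgb_in + sigma_ood[:, None] * rgb_ood) / denom
    return (weights[:, None] * rgb).sum(axis=0)        # rendered pixel colour

rng = np.random.default_rng(1)
N = 64
pixel = composite_ray(rng.uniform(0, 2, N), rng.uniform(0, 1, (N, 3)),
                      rng.uniform(0, 2, N), rng.uniform(0, 1, (N, 3)),
                      np.full(N, 0.05))
```

Because each field keeps its own density, editing the in-distribution face code leaves the OOD field's contribution untouched, which is the trade-off relief the abstract describes.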

    TinyHAR: A Lightweight Deep Learning Model Designed for Human Activity Recognition

    Deep learning models have shown excellent performance in human activity recognition tasks. However, these models typically require large amounts of computational resources, which makes them inefficient to deploy on edge devices. Furthermore, the superior performance of deep learning models relies heavily on the availability of large datasets to avoid over-fitting, yet the expensive effort of labeling limits the size of available datasets. We address both challenges by designing a more lightweight model, called TinyHAR. TinyHAR is designed specifically for human activity recognition, exploiting the different saliency of modalities, multimodal collaboration, and temporal information extraction. Initial experimental results show that TinyHAR is several times smaller than, and often meets or even surpasses the performance of, DeepConvLSTM, a state-of-the-art human activity recognition model.
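The abstract does not say how the "different saliency of modalities" is realised; one common lightweight mechanism is a softmax-normalised per-channel attention that reweights sensor modalities before temporal pooling. The sketch below shows that generic idea only; the shapes, weights, and `weigh_and_pool` helper are hypothetical and not taken from TinyHAR:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical input: a window of 100 time steps from 6 sensor channels.
T, C = 100, 6
x = rng.normal(size=(T, C))

# One learned score per channel stands in for the saliency parameters.
saliency_w = rng.normal(size=C)

def weigh_and_pool(x, saliency_w):
    """Reweight sensor channels by softmax saliency, then pool over time."""
    scores = np.exp(saliency_w - saliency_w.max())
    attn = scores / scores.sum()          # (C,) channel attention, sums to 1
    weighted = x * attn                   # broadcast over the time axis
    return weighted.mean(axis=0)          # temporal average pooling -> (C,)

feat = weigh_and_pool(x, saliency_w)
```

Such a mechanism adds only C parameters, which is the kind of cost an edge-deployable model can afford.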

    Automatic Feature Engineering through Monte Carlo Tree Search

    The performance of machine learning models depends heavily on the feature space and feature engineering. Although neural networks have made significant progress in learning latent feature spaces from data, compositional feature engineering through nested feature transformations can reduce model complexity and can be particularly desirable for interpretability. To find suitable transformations automatically, state-of-the-art methods model the feature transformation space by graph structures and use heuristics such as ε-greedy to search it. Such search strategies tend to become less efficient over time because they do not consider the sequential information of the candidate sequences and cannot dynamically adjust the heuristic strategy. To address these shortcomings, we propose a reinforcement learning-based automatic feature engineering method, which we call Monte Carlo tree search Automatic Feature Engineering (mCAFE). We employ a surrogate model that can capture the sequential information contained in the transformation sequence and thus can dynamically adjust the exploration strategy. It balances exploration and exploitation by Thompson sampling and uses a Long Short-Term Memory (LSTM) based surrogate model to estimate sequences of promising transformations. In our experiments, mCAFE outperformed state-of-the-art automatic feature engineering methods on most common benchmark datasets.
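The exploration/exploitation balance via Thompson sampling that the abstract mentions can be illustrated in its simplest bandit form: sample a success rate from each candidate's Beta posterior, pick the best sample, and update the posterior with the observed outcome. The three "arms" and their success rates below are hypothetical stand-ins for candidate feature transformations, not anything from mCAFE:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical bandit: each arm is a candidate feature transformation, and
# its unknown success rate stands in for the chance it improves the model.
true_rates = np.array([0.2, 0.5, 0.8])
alpha = np.ones(3)   # Beta posterior: observed successes + 1
beta = np.ones(3)    # Beta posterior: observed failures + 1

for _ in range(2000):
    samples = rng.beta(alpha, beta)       # one draw from each posterior
    arm = int(np.argmax(samples))         # act greedily on the draws
    reward = rng.random() < true_rates[arm]
    alpha[arm] += reward                  # update the chosen arm's posterior
    beta[arm] += 1 - reward

best = int(np.argmax(alpha / (alpha + beta)))
```

Early on the wide posteriors make every arm plausible (exploration); as evidence accumulates the draws concentrate on the genuinely best arm (exploitation), with no tuned ε schedule.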